14 research outputs found

    Machine Learning-powered Combinatorial Clock Auction

    Full text link
    We study the design of iterative combinatorial auctions (ICAs). The main challenge in this domain is that the bundle space grows exponentially in the number of items. To address this, several papers have recently proposed machine learning (ML)-based preference elicitation algorithms that aim to elicit only the most important information from bidders. However, from a practical point of view, the main shortcoming of this prior work is that those designs elicit bidders' preferences via value queries (i.e., "What is your value for the bundle {A, B}?"). In most real-world ICA domains, value queries are considered impractical, since they impose an unrealistically high cognitive burden on bidders, which is why they are not used in practice. In this paper, we address this shortcoming by designing an ML-powered combinatorial clock auction that elicits information from the bidders only via demand queries (i.e., "At prices p, what is your most preferred bundle of items?"). We make two key technical contributions: first, we present a novel method for training an ML model on demand queries; second, based on those trained ML models, we introduce an efficient method for determining the demand query with the highest clearing potential, for which we also provide a theoretical foundation. We experimentally evaluate our ML-based demand query mechanism in several spectrum auction domains and compare it against the most established real-world ICA: the combinatorial clock auction (CCA). Our mechanism significantly outperforms the CCA in terms of efficiency in all domains, achieves higher efficiency in a significantly reduced number of rounds, and, using linear prices, exhibits vastly higher clearing potential. Thus, with this paper we bridge the gap between research and practice and propose the first practical ML-powered ICA.
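    To make the demand-query format concrete: at linear item prices p, a bidder reports the bundle maximizing utility v(B) minus the price of B. The sketch below answers such a query by brute force over the bundle space for a toy valuation (all names are hypothetical, and the exhaustive search is exactly what is infeasible at real-world scale, which is the point of the paper's learned approach):

```python
from itertools import combinations

def demand_query(value, prices, items):
    """Answer a demand query: the bundle maximizing v(B) - p(B)
    at the given linear (per-item) clock prices."""
    best_bundle, best_utility = frozenset(), 0.0
    for r in range(len(items) + 1):
        for bundle in combinations(items, r):
            b = frozenset(bundle)
            utility = value(b) - sum(prices[i] for i in bundle)
            if utility > best_utility:
                best_bundle, best_utility = b, utility
    return best_bundle, best_utility

# Toy valuation with complementarity between items A and B.
v = lambda b: {frozenset(): 0, frozenset("A"): 4, frozenset("B"): 3,
               frozenset("AB"): 10}[b]
bundle, u = demand_query(v, {"A": 2, "B": 2}, "AB")
# The complementary pair {A, B} gives utility 10 - 4 = 6, beating either singleton.
```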

    Bayesian Optimization-based Combinatorial Assignment

    Full text link
    We study the combinatorial assignment domain, which includes combinatorial auctions and course allocation. The main challenge in this domain is that the bundle space grows exponentially in the number of items. To address this, several papers have recently proposed machine learning-based preference elicitation algorithms that aim to elicit only the most important information from agents. However, the main shortcoming of this prior work is that it does not model a mechanism's uncertainty over values for not yet elicited bundles. In this paper, we address this shortcoming by presenting a Bayesian Optimization-based Combinatorial Assignment (BOCA) mechanism. Our key technical contribution is to integrate a method for capturing model uncertainty into an iterative combinatorial auction mechanism. Concretely, we design a new method for estimating an upper uncertainty bound that can be used as an acquisition function to determine the next query to the agents. This enables the mechanism to properly explore (and not just exploit) the bundle space during its preference elicitation phase. We run computational experiments in several spectrum auction domains to evaluate BOCA's performance. Our results show that BOCA achieves higher allocative efficiency than state-of-the-art approaches.
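    The acquisition-function idea can be sketched generically: score each candidate bundle by mean prediction plus a scaled uncertainty term and query the bundle with the highest score. The sketch below uses ensemble spread as a stand-in uncertainty estimate; this is an assumption for illustration, not BOCA's actual upper-uncertainty-bound estimator, and all names are hypothetical:

```python
from statistics import mean, stdev

def upper_uncertainty_bound(models, bundle, beta=1.0):
    """Optimistic value estimate: ensemble mean plus scaled spread."""
    preds = [m(bundle) for m in models]
    return mean(preds) + beta * stdev(preds)

def next_query(models, candidates, beta=1.0):
    """Pick the candidate bundle with the highest acquisition value,
    so high-uncertainty regions get explored, not just exploited."""
    return max(candidates,
               key=lambda b: upper_uncertainty_bound(models, b, beta))

# Toy ensemble: the two models agree on small bundles but disagree on (1, 1),
# so (1, 1) has high uncertainty and is selected as the next query.
ensemble = [lambda b: 5.0 if b == (1, 1) else 1.0,
            lambda b: 1.0]
query = next_query(ensemble, [(1, 0), (0, 1), (1, 1)])
```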

    NOMU: Neural Optimization-based Model Uncertainty

    Full text link
    We study methods for estimating model uncertainty for neural networks (NNs). To isolate the effect of model uncertainty, we focus on a noiseless setting with scarce training data. We introduce five important desiderata regarding model uncertainty that any method should satisfy. However, we find that established benchmarks often fail to reliably capture some of these desiderata, even those that are required by Bayesian theory. To address this, we introduce a new approach for capturing model uncertainty for NNs, which we call Neural Optimization-based Model Uncertainty (NOMU). The main idea of NOMU is to design a network architecture consisting of two connected sub-NNs, one for model prediction and one for model uncertainty, and to train it using a carefully-designed loss function. Importantly, our design enforces that NOMU satisfies our five desiderata. Due to its modular architecture, NOMU can provide model uncertainty for any given (previously trained) NN if given access to its training data. We first experimentally study noiseless regression with scarce training data to highlight the deficiencies of the established benchmarks. Finally, we study the important task of Bayesian optimization (BO) with costly evaluations, where good model uncertainty estimates are essential. Our results show that NOMU performs as well as or better than state-of-the-art benchmarks.
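    The two-sub-network idea can be sketched as a forward pass: one small net produces the prediction f̂, a second produces a non-negative raw uncertainty r̂, and the uncertainty bounds are f̂ ± c·r̂. For simplicity this sketch keeps the two sub-networks separate; in NOMU itself they are connected and trained jointly with a special loss, and all names here are hypothetical:

```python
import math

def mlp(weights, biases, x):
    """Forward pass of a small fully connected net with tanh hidden units
    and a linear output layer; `weights`/`biases` are lists per layer."""
    h = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        h = [sum(w * a for w, a in zip(row, h)) + bi
             for row, bi in zip(W, b)]
        if i < len(weights) - 1:
            h = [math.tanh(v) for v in h]
    return h[0]

def nomu_bounds(f_net, r_net, x, c=1.0):
    """Prediction from one sub-net, uncertainty from the other;
    return the (lower, upper) uncertainty bounds f_hat -/+ c * r_hat."""
    f_hat = mlp(*f_net, x)
    r_hat = abs(mlp(*r_net, x))   # raw uncertainty, kept non-negative
    return f_hat - c * r_hat, f_hat + c * r_hat

# Tiny hand-built nets: prediction net computes f(x) = x,
# uncertainty net outputs a constant 0.5.
f_net = ([[[1.0]]], [[0.0]])
r_net = ([[[0.0]]], [[0.5]])
lo, hi = nomu_bounds(f_net, r_net, [2.0])
```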

    Machine Learning-powered Course Allocation

    Full text link
    We introduce a machine learning-powered course allocation mechanism. Concretely, we extend the state-of-the-art Course Match mechanism with a machine learning-based preference elicitation module. In an iterative, asynchronous manner, this module generates pairwise comparison queries that are tailored to each individual student. Regarding incentives, our machine learning-powered course match (MLCM) mechanism retains the attractive strategyproofness-in-the-large property of Course Match. Regarding welfare, we perform computational experiments using a simulator that was fitted to real-world data. Our results show that, compared to Course Match, MLCM increases average student utility by 4%-9% and minimum student utility by 10%-21%, even with only ten comparison queries. Finally, we highlight the practicability of MLCM and the ease of piloting it for universities currently using Course Match.
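    One simple way to tailor a pairwise comparison query to a student is to ask about the pair of candidate schedules whose predicted ranking is least certain, e.g. where an ensemble of utility models disagrees most about the ordering. This is only an illustrative selection rule under that assumption, not MLCM's actual query-generation criterion, and all names are hypothetical:

```python
from itertools import combinations
from statistics import mean

def query_pair(models, schedules):
    """Pick the pair of schedules whose predicted ranking is most
    uncertain: the pair whose mean utility difference across the
    ensemble is closest to zero."""
    def disagreement(pair):
        a, b = pair
        return abs(mean(m(a) - m(b) for m in models))
    return min(combinations(schedules, 2), key=disagreement)

# Toy ensemble of two fitted utility models for one student: they agree
# that s3 is worst but disagree on the order of s1 and s2, so the
# student is asked to compare s1 and s2.
ensemble = [lambda s: {"s1": 3.0, "s2": 2.0, "s3": 0.0}[s],
            lambda s: {"s1": 2.0, "s2": 3.0, "s3": 0.0}[s]]
pair = query_pair(ensemble, ["s1", "s2", "s3"])
```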

    Fourier Analysis-based Iterative Combinatorial Auctions

    Full text link
    Recent advances in Fourier analysis have brought new tools to efficiently represent and learn set functions. In this paper, we bring the power of Fourier analysis to the design of combinatorial auctions (CAs). The key idea is to approximate bidders' value functions using Fourier-sparse set functions, which can be computed using a relatively small number of queries. Since this number is still too large for real-world CAs, we propose a new hybrid design: we first use neural networks to learn bidders' values and then apply Fourier analysis to the learned representations. On a technical level, we formulate a Fourier transform-based winner determination problem and derive its mixed integer program formulation. Based on this, we devise an iterative CA that asks Fourier-based queries. We experimentally show that our hybrid ICA achieves higher efficiency than prior auction designs, leads to a fairer distribution of social welfare, and significantly reduces runtime. With this paper, we are the first to leverage Fourier analysis in CA design and lay the foundation for future work in this area.
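    The underlying notion of a Fourier transform for set functions can be sketched directly: a value function on n items has Walsh-Hadamard coefficients indexed by subsets, and a "Fourier-sparse" function is one where only a few coefficients are nonzero (e.g., an additive valuation has nonzero coefficients only on the empty set and the singletons). A naive O(4^n) computation for toy sizes, with hypothetical names:

```python
from itertools import product

def fourier_coefficients(value, n):
    """Walsh-Hadamard (Fourier) coefficients of a set function on n items,
    indexed by subsets T (as 0/1 tuples):
    f_hat(T) = 2^-n * sum_S v(S) * (-1)^{|S intersect T|}."""
    coeffs = {}
    for T in product([0, 1], repeat=n):
        total = 0.0
        for S in product([0, 1], repeat=n):
            sign = (-1) ** sum(s & t for s, t in zip(S, T))
            total += value(S) * sign
        coeffs[T] = total / 2 ** n
    return coeffs

# Additive valuation v(S) = 1*[item 1 in S] + 2*[item 2 in S]: its
# spectrum is sparse, with no coefficient on the pair {1, 2}.
coeffs = fourier_coefficients(lambda S: S[0] + 2 * S[1], 2)
```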

    Deep Learning-powered Iterative Combinatorial Auctions

    Full text link
    In this paper, we study the design of deep learning-powered iterative combinatorial auctions (ICAs). We build on prior work where preference elicitation was done via kernelized support vector regressions (SVRs). However, the SVR-based approach has limitations because it requires solving a machine learning (ML)-based winner determination problem (WDP). With expressive kernels (like Gaussian kernels), the ML-based WDP cannot be solved for large domains. While linear or quadratic kernels have better computational scalability, these kernels have limited expressiveness. In this work, we address these shortcomings by using deep neural networks (DNNs) instead of SVRs. We first show how the DNN-based WDP can be reformulated into a mixed integer program (MIP). Second, we experimentally compare the prediction performance of DNNs against SVRs. Third, we present experimental evaluations in two medium-sized domains which show that even ICAs based on relatively small-sized DNNs lead to higher economic efficiency than ICAs based on kernelized SVRs. Finally, we show that our DNN-powered ICA also scales well to very large CA domains.
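    The key step in reformulating a trained ReLU network as a MIP is the standard big-M encoding of each unit z = max(0, w·x + b): a binary variable a indicates whether the unit is active, and four linear inequalities pin z down. The sketch below is only a feasibility checker for that encoding on given numbers (no solver involved), with hypothetical names and an assumed bound M:

```python
def relu_bigM_feasible(w, b, x, z, a, M=100.0):
    """Check the standard big-M MILP encoding of z = max(0, w.x + b):
    z >= w.x + b,  z <= w.x + b + M*(1 - a),  z <= M*a,  z >= 0,
    with binary activation indicator a and M a valid bound."""
    pre = sum(wi * xi for wi, xi in zip(w, x)) + b
    eps = 1e-9
    return (a in (0, 1)
            and z >= pre - eps
            and z <= pre + M * (1 - a) + eps
            and z <= M * a + eps
            and z >= -eps)

# Inactive unit: pre-activation 0.5*2 - 2 = -1, so ReLU output 0, a = 0.
ok_inactive = relu_bigM_feasible([0.5], -2.0, [2.0], z=0.0, a=0)
# Active unit: pre-activation 0.5*6 - 2 = 1, so ReLU output 1, a = 1.
ok_active = relu_bigM_feasible([0.5], -2.0, [6.0], z=1.0, a=1)
```

Stacking these constraints layer by layer over a trained network's fixed weights yields the MIP whose optimum is the ML-based WDP solution.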

    On the modelling of limit order books using Markov chains in continuous time

    No full text
    This diploma thesis deals with the modeling of the limit order book using a 2K-dimensional Markov process X in a continuous-time setting. The first chapter serves as a general introduction to electronic markets and the limit order book. In Chapter 2 we present a general framework for the order book volume under the assumption of a fixed reference price. We thereby prove that, under certain regularity conditions, the Markov process X admits an invariant distribution; furthermore, properties of X such as irreducibility, explosivity, and recurrence are discussed. Chapter 3 presents the first specific model (Model I) for the order book volume, covering time periods with a constant reference price under the additional assumption of componentwise independence. Within the framework of Model I it is possible to obtain an explicit formula for the invariant distribution; moreover, we discuss parameter estimation and the relation of X to its associated discrete-time jump process Z. In Chapter 4, a simple estimation of the reference price from empirical data is presented. Chapter 5 is devoted to the LOBSTER database, which serves as the foundation for our empirical studies; as part of this chapter we also explain the methodology for estimating the intensities from the underlying data. In Chapter 6 we conduct an empirical study of order book modeling with Model I for the two stocks France Telekom and Microsoft Corporation, presenting the estimated intensities and invariant distributions and comparing them with the values observed empirically in the data. Chapter 7 presents an algorithm for simulating an individual price limit within Model I; as a concrete application, we discuss how to estimate the probability of execution of a limit order. In Chapter 8 we finally present Model III, which extends Model I and describes the order book volume over an arbitrary time horizon with a dynamic reference price. In addition, we present an algorithm for simulating the order book volume in Model III and discuss a possible approach to parameter calibration. The last chapter provides the R source code used.
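    Under componentwise independence, the queue at a single price level is a one-dimensional birth-death chain, for which the invariant distribution has a closed product form. The sketch below assumes a standard intensity structure (limit orders arrive at rate λ, deaths occur at rate μ + kθ in state k from market orders plus proportional cancellations); the thesis's Model I intensities may differ, and all names are hypothetical:

```python
def invariant_distribution(lam, mu, theta, n_max):
    """Invariant distribution of a birth-death queue at one price level,
    truncated at n_max: pi_n is proportional to the product over
    k = 1..n of lam / (mu + k*theta), by detailed balance."""
    weights = [1.0]
    for k in range(1, n_max + 1):
        weights.append(weights[-1] * lam / (mu + k * theta))
    total = sum(weights)
    return [w / total for w in weights]

# Limit orders at rate 1, market orders at rate 1, cancellation rate 0.5
# per resting order: the queue-length distribution decays with depth.
pi = invariant_distribution(1.0, 1.0, 0.5, 50)
```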

    Monotone-Value Neural Networks: Exploiting Preference Monotonicity in Combinatorial Assignment

    Full text link
    Many important resource allocation problems involve the combinatorial assignment of items, e.g., auctions or course allocation. Because the bundle space grows exponentially in the number of items, preference elicitation is a key challenge in these domains. Recently, researchers have proposed ML-based mechanisms that outperform traditional mechanisms while reducing preference elicitation costs for agents. However, one major shortcoming of the ML algorithms that were used is their disregard of important prior knowledge about agents' preferences. To address this, we introduce monotone-value neural networks (MVNNs), which are designed to capture combinatorial valuations, while enforcing monotonicity and normality. On a technical level, we prove that our MVNNs are universal in the class of monotone and normalized value functions, and we provide a mixed-integer linear program (MILP) formulation to make solving MVNN-based winner determination problems (WDPs) practically feasible. We evaluate our MVNNs experimentally in spectrum auction domains. Our results show that MVNNs improve the prediction performance, they yield state-of-the-art allocative efficiency in the auction, and they also reduce the run-time of the WDPs. Our code is available on GitHub: https://github.com/marketdesignresearch/MVNN
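    The structural idea behind MVNNs can be sketched in a few lines: with non-negative weights and monotone, bounded activations (and, in this simplified sketch, zero biases), the network output is automatically monotone in the bundle and normalized so that the empty bundle has value zero. This is an assumption-laden toy forward pass, not the paper's exact architecture (which uses bounded ReLU activations with trained bias terms):

```python
def brelu(t, cap=1.0):
    """Bounded ReLU: a monotone activation clipped to [0, cap]."""
    return min(max(t, 0.0), cap)

def mvnn_value(layers, bundle):
    """Forward pass with non-negative weights, zero biases, and bounded
    ReLU hidden units, so the value is monotone in the bundle and
    normalized: v(empty bundle) = 0."""
    h = [float(x) for x in bundle]
    for i, W in enumerate(layers):
        h = [sum(w * a for w, a in zip(row, h)) for row in W]
        if i < len(layers) - 1:
            h = [brelu(t) for t in h]
    return h[0]

# Two items, one hidden layer of two units, all weights non-negative.
layers = [[[0.5, 0.5], [1.0, 0.0]],   # hidden layer weights
          [[1.0, 1.0]]]               # output layer weights
v_empty = mvnn_value(layers, (0, 0))
v_single = mvnn_value(layers, (1, 0))
v_pair = mvnn_value(layers, (1, 1))
```

Monotonicity (adding items never lowers the value) and normalization hold by construction here, which is the prior knowledge MVNNs bake into the hypothesis class.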
